Doob martingale

A Doob martingale (also known as a Lévy martingale) is a mathematical construction of a stochastic process which approximates a given random variable and has the martingale property with respect to the given filtration. It may be thought of as the evolving sequence of best approximations to the random variable based on information accumulated up to a certain time.

When analyzing sums, random walks, or other additive functions of independent random variables, one can often apply the central limit theorem, the law of large numbers, Chernoff bounds, Chebyshev's inequality, or similar tools. When analyzing similar objects where the differences are not independent, the main tools are martingales and Azuma's inequality.


Definition

A Doob martingale (named after J. L. Doob) is a generic construction that is always a martingale. Specifically, consider any sequence of random variables

\vec{X}=(X_1, X_2, ..., X_n)

taking values in a set A, together with a function f:A^n \to \Bbb{R} of interest, and define:

B_i=E_{X_{i+1},X_{i+2},...,X_{n}}[f(\vec{X})|X_{1},X_{2},...,X_{i}]

where the above expectation is itself a random quantity since the expectation is only taken over

X_{i+1},X_{i+2},...,X_{n},

and

X_{1},X_{2},...,X_{i}

are treated as random variables. It is possible to show that B_i is always a martingale regardless of the properties of X_i: by the tower property of conditional expectation, E[B_{i+1} \mid X_1, \dots, X_i] = B_i. Thus if one can bound the differences

|B_{i+1}-B_i|,

one can apply Azuma's inequality and show that with high probability f(\vec{X}) is concentrated around its expected value

E[f(\vec{X})]=B_0.
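
As a concrete illustration, the following Python sketch computes a Doob martingale path for the simple choice f(x_1, ..., x_n) = x_1 + ... + x_n with independent Bernoulli(p) coordinates, where the conditional expectation B_i = X_1 + ... + X_i + (n - i)p has a closed form. The function name doob_martingale_path and the Bernoulli setup are illustrative assumptions, not part of the definition above.

# Minimal sketch (illustrative assumption): the Doob martingale for
# f(x) = x_1 + ... + x_n with i.i.d. Bernoulli(p) coordinates, where
# B_i = X_1 + ... + X_i + (n - i) * p.
import random

def doob_martingale_path(xs, p):
    """Return B_0, B_1, ..., B_n for f = sum and Bernoulli(p) coordinates."""
    n = len(xs)
    path = []
    partial = 0.0
    for i in range(n + 1):
        if i > 0:
            partial += xs[i - 1]            # reveal the i-th coordinate
        path.append(partial + (n - i) * p)  # E[f | X_1, ..., X_i]
    return path

if __name__ == "__main__":
    random.seed(0)
    n, p = 10, 0.3
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    path = doob_martingale_path(xs, p)
    print("B_0 =", path[0], "(equals E[f] = n*p =", n * p, ")")
    print("B_n =", path[-1], "(equals f(X) =", sum(xs), ")")

The path starts at B_0 = E[f(\vec{X})] = np and ends at B_n = f(\vec{X}), and each increment B_{i+1} - B_i = X_{i+1} - p has conditional mean zero, which is exactly the martingale property.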

McDiarmid's inequality

One common way of bounding the differences and applying Azuma's inequality to a Doob martingale leads to what is known as McDiarmid's inequality. Suppose X_1, X_2, \dots, X_n are independent, and assume that f satisfies

\sup_{x_1,x_2,\dots,x_n, \hat x_i} |f(x_1,x_2,\dots,x_n) - f(x_1,x_2,\dots,x_{i-1},\hat x_i, x_{i+1}, \dots, x_n)| 
\le c_i \qquad \text{for} \quad 1 \le i \le n \; .

(In other words, replacing the i-th coordinate x_i by some other value changes the value of f by at most c_i.)

It follows that

|B_{i+1}-B_i| \le c_{i+1}

and therefore Azuma's inequality yields the following McDiarmid inequalities for any \varepsilon > 0:


\Pr \left\{ f(X_1, X_2, \dots, X_n) - E[f(X_1, X_2, \dots, X_n)] \ge \varepsilon \right\} 
\le 
\exp \left( - \frac{2 \varepsilon^2}{\sum_{i=1}^n c_i^2} \right)

and


\Pr \left\{ E[f(X_1, X_2, \dots, X_n)] - f(X_1, X_2, \dots, X_n) \ge \varepsilon \right\} 
\le 
\exp \left( - \frac{2 \varepsilon^2}{\sum_{i=1}^n c_i^2} \right)

and


\Pr \left\{ |E[f(X_1, X_2, \dots, X_n)] - f(X_1, X_2, \dots, X_n)| \ge \varepsilon \right\} 
\le 2 \exp \left( - \frac{2 \varepsilon^2}{\sum_{i=1}^n c_i^2} \right). \;
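
As a numerical sketch (the choice of f, the Bernoulli parameters, and the helper names mcdiarmid_bound and empirical_tail are illustrative assumptions, not part of the statement above), the following Python code applies the two-sided McDiarmid inequality to the sample mean f(X_1, ..., X_n) = (X_1 + ... + X_n)/n of independent Bernoulli(p) variables. Changing one coordinate changes f by at most 1/n, so c_i = 1/n and the two-sided bound becomes 2\exp(-2n\varepsilon^2).

# Minimal sketch (illustrative assumption): McDiarmid's inequality for the
# sample mean of independent Bernoulli(p) variables, with c_i = 1/n.
import math
import random

def mcdiarmid_bound(c, eps):
    """Two-sided McDiarmid bound 2 * exp(-2 * eps^2 / sum_i c_i^2)."""
    return 2.0 * math.exp(-2.0 * eps ** 2 / sum(ci ** 2 for ci in c))

def empirical_tail(n, p, eps, trials=100000, seed=0):
    """Monte Carlo estimate of Pr{ |f(X) - E[f(X)]| >= eps } for f = sample mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        f = sum(1 for _ in range(n) if rng.random() < p) / n
        if abs(f - p) >= eps:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    n, p, eps = 100, 0.5, 0.1
    c = [1.0 / n] * n   # bounded-differences constants for the sample mean
    print("McDiarmid bound:", mcdiarmid_bound(c, eps))   # 2 * exp(-2) ~ 0.271
    print("Empirical tail: ", empirical_tail(n, p, eps))

For n = 100 and \varepsilon = 0.1 the bound is 2e^{-2} \approx 0.27, while the simulated tail probability is noticeably smaller, illustrating that McDiarmid's inequality is a valid but sometimes loose upper bound.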
